Theoretical, Measured and Subjective Responsibility in Aided Decision Making
When humans interact with intelligent systems, their causal responsibility
for outcomes becomes equivocal. We analyze the descriptive abilities of a newly
developed responsibility quantification model (ResQu) to predict actual human
responsibility and perceptions of responsibility in the interaction with
intelligent systems. In two laboratory experiments, participants performed a
classification task. They were aided by classification systems with different
capabilities. We compared the predicted theoretical responsibility values to
the actual measured responsibility participants took on and to their subjective
rankings of responsibility. The model predictions were strongly correlated with
both measured and subjective responsibility. A bias appeared only among
participants with poor classification capabilities, who relied less than
optimally on a system with superior classification capabilities and thus
assumed higher-than-optimal responsibility. The study implies that when humans interact
with advanced intelligent systems, with capabilities that greatly exceed their
own, their comparative causal responsibility will be small, even if formally
the human is assigned major roles. Simply putting a human into the loop does
not ensure that the human will meaningfully contribute to the outcomes. The
results demonstrate the descriptive value of the ResQu model to predict
behavior and perceptions of responsibility by considering the characteristics
of the human, the intelligent system, the environment and some systematic
behavioral biases. The ResQu model is a new quantitative method that can be
used in system design and can guide policy and legal decisions regarding human
responsibility in events involving intelligent systems.
The Responsibility Quantification (ResQu) Model of Human Interaction with Automation
Intelligent systems and advanced automation are involved in information
collection and evaluation, in decision-making and in the implementation of
chosen actions. In such systems, human responsibility becomes equivocal.
Understanding human causal responsibility is particularly important when
intelligent autonomous systems can harm people, as with autonomous vehicles or,
most notably, with autonomous weapon systems (AWS). Using Information Theory,
we develop a responsibility quantification (ResQu) model of human involvement
in intelligent automated systems and demonstrate its applications on decisions
regarding AWS. The analysis reveals that human comparative responsibility to
outcomes is often low, even when major functions are allocated to the human.
Thus, broadly stated policies of keeping humans in the loop and having
meaningful human control are misleading and cannot truly direct decisions on
how to involve humans in intelligent systems and advanced automation. The
current model is an initial step toward the complex goal of creating a
comprehensive responsibility model that will enable quantification of human
causal responsibility. It assumes stationarity and full knowledge of the
characteristics of the human and the automation, and it ignores temporal
aspects.
Despite these limitations, it can aid in the analysis of system design
alternatives and in policy decisions regarding human responsibility in
intelligent systems and advanced automation.
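The core information-theoretic idea can be illustrated with a toy computation. The sketch below is an assumption-laden simplification, not the ResQu model's exact formulation: it treats the human's comparative responsibility as the conditional mutual information between the human's action and the outcome, given the automation's indication, normalized by the outcome's entropy. The function names and the toy datasets are invented for illustration. Note that a fully compliant human (action always equals the automation's indication) contributes no unique information, echoing the zero-responsibility result in the control-room study below.

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def cond_mutual_info(samples):
    """Estimate I(Y; A | X) in bits from (x, a, y) samples.

    x: automation indication, a: human action, y: outcome.
    """
    n = len(samples)
    p_x = Counter(x for x, a, y in samples)
    p_xa = Counter((x, a) for x, a, y in samples)
    p_xy = Counter((x, y) for x, a, y in samples)
    p_xay = Counter(samples)
    cmi = 0.0
    for (x, a, y), c in p_xay.items():
        p = c / n
        cmi += p * math.log2((p * (p_x[x] / n)) /
                             ((p_xa[(x, a)] / n) * (p_xy[(x, y)] / n)))
    return cmi

# Toy data (illustrative, not from the papers):
# fully compliant human: action a always equals indication x.
compliant = [(0, 0, 0), (0, 0, 0), (1, 1, 1), (1, 1, 1)]
# overriding human: the action, not the indication, drives the outcome.
overriding = [(0, 0, 0), (0, 1, 1), (1, 1, 1), (1, 0, 0)]

h_y = entropy([0.5, 0.5])  # outcome entropy in both toy datasets (1 bit)
print(cond_mutual_info(compliant) / h_y)   # 0.0: no unique human contribution
print(cond_mutual_info(overriding) / h_y)  # 1.0: outcome fully tracks the human
```

Under this toy measure, responsibility shrinks as the human's action becomes redundant with the automation's indication, which is the qualitative pattern the abstracts describe for highly capable automation.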
Objective and Subjective Responsibility of a Control-Room Worker
When working with AI and advanced automation, human responsibility for
outcomes becomes equivocal. We applied a newly developed responsibility
quantification model (ResQu) to the real-world setting of a control room in a
dairy factory to calculate workers' objective responsibility in a common fault
scenario. We compared the results to the subjective assessments made by
different functions in the dairy. The capabilities of the automation greatly
exceeded those of the human, and the optimal operator should have fully
complied with the indications of the automation. Thus, in this case, the
operator had no unique contribution, and the objective causal human
responsibility was zero. However, outside observers, such as managers, tended
to assign much higher responsibility to the operator, in a manner that
resembled aspects of the "fundamental attribution error". This, in turn, may
lead to unjustifiably holding operators responsible for adverse outcomes in
situations in which they rightly trusted the automation, and acted accordingly.
We demonstrate the use of the ResQu model for the analysis of human causal
responsibility in intelligent systems. The model can help calibrate exogenous
subjective responsibility attributions, aid system design, and guide policy and
legal decisions.